    The benefits of adversarial defense in generalization

    Recent research has shown that models induced by machine learning, and in particular by deep learning, can be easily fooled by an adversary who carefully crafts modifications of the input data that are imperceptible, at least from the human perspective, or physically plausible. This discovery gave birth to a new field of research, adversarial machine learning, where new methods of attack and defense are developed continuously, mimicking what has been happening for a long time in cybersecurity. In this paper we show that inducing models that are less prone to being misled, despite its drawbacks, can actually provide some benefits when it comes to assessing their generalization abilities. We show these benefits both from a theoretical perspective, using state-of-the-art statistical learning theory, and with practical examples.
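
    As a concrete illustration of the kind of attack the abstract refers to, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model. FGSM, the model, and every parameter here are illustrative assumptions, not the paper's own setup.

        # Hypothetical FGSM sketch: not the paper's method or experiments.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fgsm_perturb(x, y, w, b, eps=0.1):
            # One signed-gradient step on the logistic loss; for this model
            # the gradient of the loss w.r.t. the input x is
            # (sigmoid(w.x + b) - y) * w.
            grad_x = (sigmoid(w @ x + b) - y) * w
            return x + eps * np.sign(grad_x)

        # Toy usage: a point labeled y=1 is nudged toward the decision boundary.
        rng = np.random.default_rng(0)
        w, b = rng.normal(size=5), 0.0
        x, y = rng.normal(size=5), 1.0
        x_adv = fgsm_perturb(x, y, w, b)
        print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # confidence drops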

    Digital kernel perceptron


    Using K-Winner Machines for domain analysis

    The K-Winner Machine (KWM) model combines unsupervised and supervised training paradigms, and builds up a family of nested classifiers that differ in their expected generalization performances. A KWM allows members of the classifier family to reject a test pattern, and predicting the rejection rate is a crucial issue for the method's ultimate effectiveness. The analytical properties of the KWM first drive a theoretical analysis of the rejection performance. The paper then shows that the KWM classification process can also be profitably used for domain inspection. Novel theorems connect the outputs of KWMs directly to the class-separating boundaries in the data space. Empirical evidence eventually supports the intuitive result that smaller confidence values characterize boundary regions.
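
    The rejection mechanism the abstract describes can be illustrated with a small sketch: a nearest-prototype classifier that abstains when its confidence falls below a threshold, so that sweeping the threshold yields a nested family of classifiers with growing rejection rates. This is not the KWM algorithm itself; the prototypes, confidence measure, and thresholds below are assumptions chosen only to show the idea.

        # Hypothetical rejection sketch: not the K-Winner Machine itself.
        import numpy as np

        def classify_with_reject(x, prototypes, labels, threshold):
            # Confidence is the relative margin between the nearest prototype
            # of the winning class and the nearest prototype of any other
            # class, so it shrinks near class-separating boundaries.
            d = np.linalg.norm(prototypes - x, axis=1)
            winner = labels[np.argmin(d)]
            d_win = d[labels == winner].min()
            d_other = d[labels != winner].min()
            confidence = (d_other - d_win) / (d_other + d_win)
            return winner if confidence >= threshold else None  # None = reject

        # Raising the threshold yields a nested family: every pattern accepted
        # at a stricter threshold is also accepted at a looser one.
        prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
        labels = np.array([0, 1])
        for t in (0.0, 0.2, 0.4):
            print(t, classify_with_reject(np.array([0.45, 0.5]), prototypes, labels, t))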

    Empirical Measure of Multiclass Generalization Performance: the K-Winner Machine Case


    Distribution-dependent weighted union bound

    In this paper, we deal with the classical Statistical Learning Theory problem of bounding, with high probability, the true risk R(h) of a hypothesis h chosen from a set H of m hypotheses. The Union Bound (UB) allows one to state that P{L(R̂(h), δ q_h) ≤ R(h) ≤ U(R̂(h), δ p_h)} ≥ 1 − δ, where R̂(h) is the empirical error of h, provided it is possible to prove that P{R(h) ≥ L(R̂(h), δ)} ≥ 1 − δ and P{R(h) ≤ U(R̂(h), δ)} ≥ 1 − δ, when q_h and p_h are chosen before seeing the data such that q_h, p_h ∈ [0, 1] and Σ_{h∈H}(q_h + p_h) = 1. If no a priori information is available, q_h and p_h are set to 1/(2m), i.e., equally distributed. This approach gives poor results since, as a matter of fact, a learning procedure targets just particular hypotheses, namely those with small empirical error, disregarding the others. In this work we set q_h and p_h in a distribution-dependent way, increasing the probability that hypotheses with small true risk are chosen. We call this proposal the Distribution-Dependent Weighted UB (DDWUB), and we derive sufficient conditions on the choice of q_h and p_h under which DDWUB outperforms, or in the worst case degenerates into, UB. Furthermore, theoretical and numerical results show the applicability, validity, and potential of DDWUB.
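
    A small numeric sketch may help: using the one-sided Hoeffding bound U(R̂(h), δ) = R̂(h) + sqrt(ln(1/δ)/(2n)), we compare the uniform split of δ across m hypotheses with a weighting that favors hypotheses with small empirical error. The weighting scheme below is an illustrative assumption, not the paper's actual DDWUB construction.

        # Hypothetical weighting sketch: not the paper's DDWUB construction.
        import numpy as np

        def hoeffding_upper(r_emp, delta, n):
            # One-sided Hoeffding bound: true risk <= this with prob >= 1 - delta.
            return r_emp + np.sqrt(np.log(1.0 / delta) / (2.0 * n))

        n, delta = 1000, 0.05
        r_emp = np.array([0.02, 0.05, 0.30, 0.45])  # empirical errors of m hypotheses
        m = len(r_emp)

        # Classical UB: split the confidence budget equally, p_h = 1/m.
        ub_uniform = hoeffding_upper(r_emp, delta / m, n)

        # Weighted UB: put more of the budget on small-error hypotheses.
        p = np.exp(-10.0 * r_emp)
        p /= p.sum()
        ub_weighted = hoeffding_upper(r_emp, delta * p, n)

        # The bound tightens exactly for the hypotheses a learner would pick.
        print(ub_uniform.round(4))
        print(ub_weighted.round(4))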

    Worst case analysis of weight inaccuracy effects in multilayer perceptrons

    We derive a new method for the analysis of weight quantization effects in multilayer perceptrons, based on the application of interval arithmetic. Differently from previous results, we find worst-case bounds on the errors due to weight quantization that are valid for every distribution of the input or weight values. Given a trained network, our method allows one to easily compute the minimum number of bits needed to encode its weights.
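
    A minimal sketch of the interval-arithmetic idea: each quantized weight is replaced by the interval [w − q/2, w + q/2], where q is the quantization step, and the intervals are propagated through one tanh layer to obtain worst-case output bounds. The network shape, step size, and input below are illustrative assumptions, not the paper's experimental setup.

        # Minimal interval-arithmetic sketch; shapes, q, and input are assumed.
        import numpy as np

        def interval_matvec(w_lo, w_hi, x):
            # Worst-case bounds of W @ x when each entry of W ranges in
            # [w_lo, w_hi]; for each product the extreme depends on sign(x).
            lo = np.minimum(w_lo * x, w_hi * x).sum(axis=1)
            hi = np.maximum(w_lo * x, w_hi * x).sum(axis=1)
            return lo, hi

        rng = np.random.default_rng(0)
        w = rng.normal(size=(3, 4))            # one trained 4-to-3 layer
        x = rng.normal(size=4)
        q = 2.0 ** -6                          # step for 6 fractional bits

        lo, hi = interval_matvec(w - q / 2, w + q / 2, x)
        out_lo, out_hi = np.tanh(lo), np.tanh(hi)  # tanh is monotone
        print(out_hi - out_lo)  # worst-case output uncertainty per neuron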

    K-Winner Machines for pattern classification


    IAVQ-interval-arithmetic vector quantization for image compression
